This paper revisits datasets and evaluation criteria for symbolic regression (SR), the task of expressing given data using mathematical equations, with a particular focus on its potential for scientific discovery. Focusing on a set of formulas used in an existing dataset based on the Feynman Lectures on Physics, we recreate 120 datasets to discuss the performance of symbolic regression for scientific discovery (SRSD). For each of the 120 SRSD datasets, we carefully review the properties of the formula and its variables to design reasonably realistic value ranges, so that our new SRSD datasets can be used to evaluate the potential of SRSD, such as whether an SR method can (re)discover physical laws from such datasets. As an evaluation metric, we also propose the use of the normalized edit distance between the predicted and ground-truth equation trees. Whereas existing metrics are either binary or measure the error between target values and an SR model's predictions, the normalized edit distance evaluates the similarity between the ground-truth and predicted equation trees. We conduct experiments on our new SRSD datasets using five state-of-the-art SR methods from SRBench, along with a simple baseline based on a recent Transformer architecture. The results show that we provide a more realistic performance evaluation and open up a new machine-learning-based avenue for scientific discovery. Our datasets and code repository are publicly available.
translated by Google Translate
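The proposed metric compares equation trees rather than predicted values. Below is a minimal sketch of one plausible instantiation, using Levenshtein distance over pre-order traversals as a simplified stand-in for a full tree edit distance; the paper's exact formulation may differ.

```python
def preorder(node):
    """Flatten an equation tree (operator, child, child, ...) into a
    pre-order token list; leaves are variables or constants."""
    if isinstance(node, tuple):
        op, *children = node
        tokens = [op]
        for child in children:
            tokens.extend(preorder(child))
        return tokens
    return [str(node)]

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def normalized_edit_distance(true_eq, pred_eq):
    """0.0 for identical trees; 1.0 for maximally dissimilar trees."""
    a, b = preorder(true_eq), preorder(pred_eq)
    return levenshtein(a, b) / max(len(a), len(b))

# Ground truth F = m * a  vs. prediction F = m * a + c
dist = normalized_edit_distance(("*", "m", "a"),
                                ("+", ("*", "m", "a"), "c"))  # → 0.4
```

Normalizing by the larger tree keeps the score in [0, 1], so equations of very different sizes remain comparable.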
Large transformer models can greatly improve Answer Sentence Selection (AS2) tasks, but their high computational costs prevent their use in many real-world applications. In this paper, we explore the following research question: How can we make the AS2 models more accurate without significantly increasing their model complexity? To address the question, we propose a Multiple Heads Student architecture (named CERBERUS), an efficient neural network designed to distill an ensemble of large transformers into a single smaller model. CERBERUS consists of two components: a stack of transformer layers that is used to encode inputs, and a set of ranking heads; unlike traditional distillation techniques, each head is trained by distilling a different large transformer architecture in a way that preserves the diversity of the ensemble members. The resulting model captures the knowledge of heterogeneous transformer models by using just a few extra parameters. We show the effectiveness of CERBERUS on three English datasets for AS2; our proposed approach outperforms all single-model distillations we consider, rivaling the state-of-the-art large AS2 models that have 2.7x more parameters and run 2.5x slower. Code for our model is available at https://github.com/amazon-research/wqa-cerberus
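The one-encoder/many-heads idea can be sketched numerically. The toy code below (hypothetical shapes and losses, not the CERBERUS implementation) fits head k only to teacher k's score, while inference averages the heads.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_enc):
    """Shared encoder stack, stood in for here by one linear layer + tanh."""
    return np.tanh(x @ W_enc)

def head_scores(h, W_heads):
    """One scalar ranking score per head on the shared encoding."""
    return np.array([float(h @ w) for w in W_heads])

def distill_loss(scores, teacher_scores):
    """Head k is fit only to teacher k; the total loss is the mean of
    per-head squared errors, which keeps the heads diverse."""
    return float(np.mean((scores - teacher_scores) ** 2))

x = rng.normal(size=8)                            # toy sentence-pair input
W_enc = rng.normal(size=(8, 8)) * 0.1
W_heads = [rng.normal(size=8) for _ in range(3)]  # 3 heads for 3 teachers

h = encode(x, W_enc)
scores = head_scores(h, W_heads)
ensemble_score = scores.mean()                    # inference: average the heads
loss = distill_loss(scores, np.array([0.2, -0.1, 0.5]))
```

The extra parameters are only the per-head weight vectors, which matches the abstract's claim of "just a few extra parameters" on top of the shared encoder.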
Although mission-critical applications require the use of deep neural networks (DNNs), their continuous execution on mobile devices results in a significant increase in energy consumption. While edge offloading can reduce energy consumption, erratic patterns in channel quality, network, and edge server load can lead to severe disruption of the system's key operations. An alternative approach, called split computing, generates compressed representations within the model (called "bottlenecks") to reduce bandwidth usage and energy consumption. Prior work has proposed approaches that introduce additional layers, to the detriment of energy consumption and latency. For this reason, we propose a new framework called BottleFit, which, in addition to targeted DNN architecture modifications, includes a novel training strategy to achieve high accuracy even with strong compression rates. We apply BottleFit to image classification and show that BottleFit achieves 77.1% data compression with up to 0.6% accuracy loss on the ImageNet dataset, compared to an accuracy loss of up to 6% for state-of-the-art approaches such as SPINN. We experimentally measure the power consumption and latency of an image classification application running on an NVIDIA Jetson Nano board (GPU-based) and a Raspberry Pi board (GPU-less). We show that BottleFit decreases power consumption and latency by up to 49% and 89%, respectively, with respect to (w.r.t.) local computing, and by 37% and 55% w.r.t. edge offloading. We also compare BottleFit with state-of-the-art autoencoder-based approaches and show that (i) BottleFit reduces power consumption and execution time by up to 54% and 44% on the Jetson and by 40% and 62% on the Raspberry Pi, respectively; and (ii) the size of the head model executed on the mobile device is 83 times smaller. The code repository will be published for full reproducibility of the results.
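A minimal numeric sketch of the bottleneck idea (hypothetical layer shapes, not BottleFit's architecture): the device-side head ends in a narrow projection, and only that compressed representation crosses the wireless channel to the edge server.

```python
import numpy as np

rng = np.random.default_rng(1)

def head(x, W_down):
    """On-device layers ending in a narrow bottleneck projection."""
    return np.maximum(x @ W_down, 0.0)             # ReLU bottleneck

def tail(z, W_up, W_cls):
    """Edge-server layers that decompress the bottleneck and classify."""
    return np.maximum(z @ W_up, 0.0) @ W_cls

d_in, d_bottleneck, n_classes = 512, 32, 10
W_down = rng.normal(size=(d_in, d_bottleneck)) * 0.05
W_up = rng.normal(size=(d_bottleneck, d_in)) * 0.05
W_cls = rng.normal(size=(d_in, n_classes)) * 0.05

x = rng.normal(size=d_in)                # feature vector on the device
z = head(x, W_down)                      # only z crosses the wireless channel
logits = tail(z, W_up, W_cls)            # the edge server finishes inference
compression = 1.0 - z.size / x.size      # fraction of bandwidth saved
```

The design tension the abstract describes is visible even here: shrinking `d_bottleneck` saves more bandwidth but discards more information, which is why the training strategy matters for preserving accuracy.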
Mobile devices such as smartphones and autonomous vehicles increasingly rely on deep neural networks (DNNs) to execute complex inference tasks such as image classification and speech recognition, among others. However, continuously executing the entire DNN on mobile devices can quickly deplete their battery. Although task offloading to cloud/edge servers may decrease the mobile device's computational burden, erratic patterns in channel quality, network, and edge server load can lead to a significant slowdown in task execution. Recently, approaches based on split computing (SC) have been proposed, where the DNN is split into a head model, executed on the mobile device, and a tail model, executed on the edge server. Ultimately, this may reduce bandwidth usage as well as energy consumption. Another approach, called early exiting (EE), trains models to embed multiple "exits" in the architecture, each providing increasingly higher target accuracy. Therefore, a trade-off between accuracy and delay can be made according to current conditions or application demands. In this paper, we provide a comprehensive survey of SC and EE strategies by presenting a comparison of the most relevant approaches. We conclude the paper with a set of compelling research challenges.
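The EE mechanism described above can be sketched as a confidence-threshold loop; the gating rule here is one common assumed form, and actual EE methods vary in how they decide when to stop.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - np.max(v))
    return e / e.sum()

def early_exit_predict(exit_logits, threshold=0.8):
    """exit_logits: one logit vector per exit, shallowest exit first.
    Return (predicted class, index of the exit that fired): stop at the
    first exit whose softmax confidence clears the threshold, or fall
    through to the final exit."""
    for i, logits in enumerate(exit_logits):
        p = softmax(np.asarray(logits, dtype=float))
        if p.max() >= threshold or i == len(exit_logits) - 1:
            return int(p.argmax()), i

cls, used_exit = early_exit_predict([[0.3, 0.2, 0.1],   # low confidence: continue
                                     [4.0, 0.1, 0.0]])  # final exit: must answer
```

Raising `threshold` pushes more inputs to deeper exits (higher accuracy, higher delay); lowering it does the opposite, which is exactly the accuracy/delay trade-off the survey discusses.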
We consider task allocation for multi-object transport using a multi-robot system, in which each robot selects one object among multiple objects with different and unknown weights. The existing centralized methods assume the number of robots and tasks to be fixed, which is inapplicable to scenarios that differ from the learning environment. Meanwhile, the existing distributed methods limit the minimum number of robots and tasks to a constant value, making them applicable to various numbers of robots and tasks. However, they cannot transport an object whose weight exceeds the load capacity of robots observing the object. To make it applicable to various numbers of robots and objects with different and unknown weights, we propose a framework using multi-agent reinforcement learning for task allocation. First, we introduce a structured policy model consisting of 1) predesigned dynamic task priorities with global communication and 2) a neural network-based distributed policy model that determines the timing for coordination. The distributed policy builds consensus on the high-priority object under local observations and selects cooperative or independent actions. Then, the policy is optimized by multi-agent reinforcement learning through trial and error. This structured policy of local learning and global communication makes our framework applicable to various numbers of robots and objects with different and unknown weights, as demonstrated by numerical simulations.
In this paper, we present a solution to a design problem of control strategies for multi-agent cooperative transport. Although existing learning-based methods assume that the number of agents is the same as that in the training environment, the number might differ in reality considering that the robots' batteries may completely discharge, or additional robots may be introduced to reduce the time required to complete a task. Therefore, it is crucial that the learned strategy be applicable to scenarios wherein the number of agents differs from that in the training environment. To this end, we propose a novel multi-agent reinforcement learning framework of event-triggered communication and consensus-based control for distributed cooperative transport. The proposed policy model estimates the resultant force and torque in a consensus manner, using the neighboring agents' estimates of the resultant force and torque. Moreover, it computes the control and communication inputs to determine when to communicate with the neighboring agents under local observations and estimates of the resultant force and torque. Therefore, the proposed framework can balance the control performance and communication savings in scenarios wherein the number of agents differs from that in the training environment. We confirm the effectiveness of our approach by using a maximum of eight and six robots in the simulations and experiments, respectively.
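The consensus-style estimation in the two abstracts above can be illustrated with standard linear consensus dynamics (an assumed form for illustration, not the papers' controllers): each robot nudges its local estimate of the resultant force toward its neighbors' estimates until the local views agree.

```python
import numpy as np

def consensus_step(estimates, adjacency, gain=0.2):
    """One synchronous consensus update over a communication graph:
    each agent moves toward the sum of its disagreements with neighbors."""
    estimates = np.asarray(estimates, dtype=float)
    A = np.asarray(adjacency, dtype=float)
    degree = A.sum(axis=1)
    return estimates + gain * (A @ estimates - degree * estimates)

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])              # three robots on a line graph
x = np.array([1.0, 0.0, -1.0])         # initial local force estimates
for _ in range(200):
    x = consensus_step(x, A)           # estimates converge to the average
```

Because the update only uses the adjacency matrix, the same rule works for any number of agents, which is the property both papers rely on for scalability.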
Humans demonstrate a variety of interesting behavioral characteristics when performing tasks, such as selecting between seemingly equivalent optimal actions, performing recovery actions when deviating from the optimal trajectory, or moderating actions in response to sensed risks. However, imitation learning, which attempts to teach robots to perform these same tasks from observations of human demonstrations, often fails to capture such behavior. Specifically, commonly used learning algorithms embody inherent contradictions between the learning assumptions (e.g., single optimal action) and actual human behavior (e.g., multiple optimal actions), thereby limiting robot generalizability, applicability, and demonstration feasibility. To address this, this paper proposes designing imitation learning algorithms with a focus on utilizing human behavioral characteristics, thereby embodying principles for capturing and exploiting actual demonstrator behavioral characteristics. This paper presents the first imitation learning framework, Bayesian Disturbance Injection (BDI), that typifies human behavioral characteristics by incorporating model flexibility, robustification, and risk sensitivity. Bayesian inference is used to learn flexible non-parametric multi-action policies, while simultaneously robustifying policies by injecting risk-sensitive disturbances to induce human recovery action and ensuring demonstration feasibility. Our method is evaluated through risk-sensitive simulations and real-robot experiments (e.g., table-sweep task, shaft-reach task and shaft-insertion task) using the UR5e 6-DOF robotic arm, to demonstrate the improved characterisation of behavior. Results show significant improvement in task performance, through improved flexibility, robustness as well as demonstration feasibility.
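The disturbance-injection idea in BDI can be sketched as risk-scaled noise added to demonstrated actions; this is an assumed simplified form, and the paper's Bayesian treatment of flexibility and risk sensitivity is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(42)

def inject_disturbance(action, risk, base_scale=0.05):
    """Perturb a demonstrated action with Gaussian noise whose scale
    grows with the sensed risk, exposing the learner to recovery states."""
    noise = rng.normal(scale=base_scale * (1.0 + risk), size=action.shape)
    return action + noise

demo_action = np.array([0.10, -0.30])              # a demonstrated 2-DOF action
mild = inject_disturbance(demo_action, risk=0.0)   # small perturbation
strong = inject_disturbance(demo_action, risk=5.0) # larger perturbation
```

Training on perturbed states forces the policy to learn how to get back onto the demonstrated trajectory, which is how injected disturbances induce the human-like recovery behavior described above.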
Generative adversarial imitation learning (GAIL) can learn policies without explicitly defining a reward function from demonstrations. GAIL has the potential to learn policies with high-dimensional observations such as images. By applying GAIL to a real robot, robot policies might be obtained for daily activities such as washing, folding clothes, cooking, and cleaning. However, human demonstration data is often imperfect due to mistakes, which degrades the performance of the resulting policy. We address this issue by focusing on the following features: 1) many robot tasks are goal-reaching tasks, and 2) it is relatively easy to label such goal states in demonstration data. With these in mind, this paper proposes Goal-Aware Generative Adversarial Imitation Learning (GA-GAIL), which trains a policy by introducing a second discriminator that distinguishes the goal state, in parallel with the first discriminator that indicates the demonstration data. This extends the standard GAIL framework to learn desirable policies even from imperfect demonstrations, through a goal-state discriminator that promotes reaching the goal state. Furthermore, GA-GAIL employs the Entropy-maximizing Deep P-Network (EDPN) as a generator, which considers both smoothness and causal entropy in the policy update, to achieve stable policy learning from the two discriminators. Our proposed method was successfully applied to two real-robot cloth-manipulation tasks: turning a handkerchief over and folding clothes. We confirmed that it learns cloth-manipulation policies without task-specific reward function design. Videos of the real experiments are available at https://youtu.be/h_nii2ooure
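The two-discriminator reward can be sketched as follows; the additive combination and `goal_weight` are assumptions for illustration, since the abstract does not specify how the discriminator outputs are combined.

```python
import numpy as np

def ga_gail_reward(d1_prob, d2_prob, goal_weight=0.5, eps=1e-8):
    """Standard GAIL-style reward -log(1 - D1) for demonstration likeness,
    plus a weighted bonus from the goal-state discriminator D2."""
    imitation = -np.log(1.0 - d1_prob + eps)
    goal_bonus = -np.log(1.0 - d2_prob + eps)
    return float(imitation + goal_weight * goal_bonus)

r_low = ga_gail_reward(0.2, 0.1)   # neither demonstration-like nor near a goal
r_high = ga_gail_reward(0.9, 0.8)  # demonstration-like and close to a goal
```

The goal term is what lets the learner outgrow imperfect demonstrations: a state can earn reward for approaching a labeled goal even when the demonstration discriminator alone would score it poorly.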
Large quantities of online user activity data, such as weekly web search volumes, which co-evolve through the mutual influence of multiple queries and locations, serve as an important social sensor. It is an important task to accurately forecast future activity by discovering latent interactions from such data, i.e., the ecosystem among queries and the flow of influence among areas. However, this is a difficult problem in terms of both data volume and the complex patterns covering the dynamics. To tackle the problem, we propose FluxCube, an effective mining method that forecasts large collections of co-evolving online user activity and provides good interpretability. Our model is an extension of the combination of two mathematical models: a reaction-diffusion system provides a framework for modeling the flow of influence between local area groups, and an ecosystem model captures the latent interactions among queries. Moreover, by leveraging the concept of physics-informed neural networks, FluxCube jointly achieves high interpretability, obtained from the model parameters, and high forecasting performance. Extensive experiments on real datasets show that FluxCube outperforms comparable models in terms of forecasting accuracy and that each component of FluxCube contributes to the enhanced performance. We then present case studies showing that FluxCube can extract useful latent interactions between queries and area groups.
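The reaction-diffusion component can be illustrated with a toy Euler step over a region graph; these dynamics are illustrative only, and FluxCube's actual model (including its ecosystem term for query interactions) is richer.

```python
import numpy as np

def reaction_diffusion_step(u, A, growth=0.3, capacity=1.0, diffusion=0.1):
    """One Euler step: logistic growth of activity within each region,
    plus diffusion of activity along the edges of a region graph."""
    u = np.asarray(u, dtype=float)
    A = np.asarray(A, dtype=float)
    reaction = growth * u * (1.0 - u / capacity)
    flow = diffusion * (A @ u - A.sum(axis=1) * u)   # net inflow per region
    return u + reaction + flow

A = np.array([[0, 1],
              [1, 0]])              # two mutually connected regions
u = np.array([0.9, 0.1])           # search activity starts concentrated
for _ in range(100):
    u = reaction_diffusion_step(u, A)   # both regions approach capacity
```

The interpretability claim rests on parameters like `growth` and `diffusion` having direct physical readings (local momentum of a query vs. spillover between areas), which is what physics-informed fitting preserves.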
Deep reinforcement learning with domain randomization learns a control policy in various simulations with randomized physics and sensor model parameters for zero-shot transfer to the real world. However, when the range of the randomized parameters is wide, a huge number of samples is often required to learn an effective policy, due to instability in policy updates. To alleviate this problem, we propose a sample-efficient method named Cyclic Policy Distillation (CPD). CPD divides the range of the randomized parameters into several small subdomains and assigns a local policy to each subdomain. Then, the local policies are learned while cyclically transitioning the target subdomain to neighboring subdomains, exploiting the learned values/policies of the neighboring subdomains with a monotonic policy-improvement scheme. Finally, all of the learned local policies are distilled into a global policy for sim-to-real transfer. The effectiveness and sample efficiency of CPD are demonstrated through simulations on four tasks (Pendulum from OpenAIGym, and Pusher, Swimmer, and HalfCheetah from MuJoCo), as well as a real-robot ball-dispersal task.
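The subdomain partitioning and cyclic sweep can be sketched schematically; the even partition and back-and-forth visiting order below are assumed scheduling details, not taken from the abstract.

```python
def make_subdomains(low, high, n):
    """Evenly partition a randomized-parameter range into n subdomains."""
    width = (high - low) / n
    return [(low + i * width, low + (i + 1) * width) for i in range(n)]

def cyclic_schedule(n_subdomains, n_sweeps):
    """Visit subdomains back and forth, so each local policy can reuse
    what its neighbors just learned before being updated again."""
    forward = list(range(n_subdomains))
    backward = forward[-2::-1]
    order = []
    for _ in range(n_sweeps):
        order.extend(forward + backward)
    return order

subdomains = make_subdomains(0.5, 2.0, 3)   # e.g. a randomized mass range
visits = cyclic_schedule(3, 2)              # order of local-policy updates
```

Each local policy only ever faces a narrow slice of the randomization, which is why its updates stay stable; the final distillation step then merges the local policies into the single global policy used for transfer.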